Learning SVM from Distributed, Non-Linearly Separable Datasets with Kernel Methods

Authors

Abstract


Similar articles

Optimal SVM parameter selection for non-separable and unbalanced datasets

This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced ...
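A minimal sketch of this kind of metric-driven parameter selection, assuming scikit-learn; the toy dataset and parameter grid below are illustrative, not taken from the study:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative non-separable, unbalanced dataset (not the study's data)
X, y = make_classification(n_samples=300, weights=[0.8, 0.2],
                           class_sep=0.5, random_state=0)

# Select C and gamma by cross-validated AUC rather than raw accuracy
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                    scoring="roc_auc", cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Setting `scoring` to `"accuracy"` or `"balanced_accuracy"` instead lets one compare how each validation metric shifts the selected parameters.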


MinSVM for Linearly Separable and Imbalanced Datasets

Class imbalance (CI) is common in most non-synthetic datasets and presents a major challenge for many classification algorithms geared towards optimizing overall accuracy, since the minority-class misclassification loss is often higher than the majority-class one. Support vector machine (SVM), a machine learning (ML) technique deeply rooted in statistics, maximizes linear margins between classes and gen...
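A common way to bias an SVM towards the costlier minority class is per-class penalty weighting; the sketch below uses scikit-learn's `class_weight="balanced"` as an illustration (this is not the MinSVM algorithm itself, and the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative 9:1 imbalanced dataset (not from the paper)
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# class_weight="balanced" rescales each class's penalty C inversely to its
# frequency, so minority-class errors cost more during margin optimization
clf = SVC(kernel="linear", class_weight="balanced").fit(X, y)

# Recall on the minority class (fraction of class-1 points recovered)
minority_recall = (clf.predict(X[y == 1]) == 1).mean()
print(round(minority_recall, 3))
```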


Learning Linearly Separable Languages

For a finite alphabet A, we define a class of embeddings of A∗ into an infinite-dimensional feature space X and show that its finitely supported hyperplanes define regular languages. This suggests a general strategy for learning regular languages from positive and negative examples. We apply this strategy to the piecewise testable languages, presenting an embedding under which these are precise...


Local Deep Kernel Learning for Efficient Non-linear SVM Prediction

Our objective is to speed up non-linear SVM prediction while maintaining classification accuracy above an acceptable limit. We generalize Localized Multiple Kernel Learning so as to learn a tree-based primal feature embedding which is high dimensional and sparse. Primal based classification decouples prediction costs from the number of support vectors and our tree-structured features efficientl...
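The primal idea behind such approaches can be illustrated loosely (this is not the paper's tree-based LDKL embedding) with an explicit kernel approximation followed by a linear SVM, assuming scikit-learn:

```python
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Non-linearly separable toy data
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Map inputs into an explicit (approximate) kernel feature space, then
# classify in the primal: prediction cost now depends on the number of
# feature components, not on the number of support vectors
clf = make_pipeline(Nystroem(n_components=50, random_state=0),
                    LinearSVC())
clf.fit(X, y)
print(round(clf.score(X, y), 3))
```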


Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks

This paper investigates an algorithm for the construction of decision trees comprised of linear threshold units and also presents a novel algorithm for learning non-linearly separable Boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard BackPropaga...



Journal

Journal title: International Journal of New Technology and Research

Year: 2018

ISSN: 2454-4116

DOI: 10.31871/ijntr.4.8.56